Deep k-grouping: An Unsupervised Learning Framework for Combinatorial Optimization on Graphs and Hypergraphs
Bai, Sen, Yang, Chunqi, Bai, Xin, Zhang, Xin, Jiang, Zhengang
As AI computing shines in scientific discovery, its potential in the combinatorial optimization (CO) domain has also emerged in recent years. Yet, existing unsupervised neural network solvers struggle to solve $k$-grouping problems (e.g., coloring, partitioning) on large-scale graphs and hypergraphs due to limited computational frameworks. In this work, we propose Deep $k$-grouping, an unsupervised learning-based CO framework. Specifically, we contribute: (i) novel one-hot encoded polynomial unconstrained binary optimization (OH-PUBO), a formulation for modeling $k$-grouping problems on graphs and hypergraphs (e.g., graph/hypergraph coloring and partitioning); (ii) GPU-accelerated algorithms for large-scale $k$-grouping CO problems: Deep $k$-grouping relaxes large-scale OH-PUBO objectives into differentiable loss functions and optimizes them in an unsupervised manner, leveraging GPU-accelerated algorithms to unify the training pipeline and ensure scalability; (iii) a Gini-coefficient-based continuous relaxation annealing strategy that enforces the discreteness of solutions while preventing convergence to local optima. Experimental results demonstrate that Deep $k$-grouping outperforms existing neural network solvers as well as classical baselines such as SCIP and Tabu search.
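The core idea of relaxing a one-hot grouping objective into a differentiable loss can be illustrated on graph coloring. The sketch below is an illustrative caricature, not the paper's OH-PUBO formulation or GPU pipeline: soft one-hot assignments come from a softmax over per-node logits, the loss penalizes the probability that an edge's endpoints share a color, and finite-difference gradients stand in for autograd; all hyperparameters are assumptions.

```python
import numpy as np

def coloring_loss(theta, edges):
    """Relaxed coloring objective: each row of softmax(theta) is a soft
    one-hot color assignment; each edge contributes the probability that
    its two endpoints pick the same color."""
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    p = z / z.sum(axis=1, keepdims=True)
    return sum(p[u] @ p[v] for u, v in edges)

# Toy instance: a triangle with 3 colors, trained by finite-difference
# gradient descent (a stand-in for autograd on a GPU).
edges = [(0, 1), (1, 2), (0, 2)]
theta = np.random.default_rng(0).normal(size=(3, 3))
for _ in range(300):
    grad = np.zeros_like(theta)
    for idx in np.ndindex(*theta.shape):
        d = np.zeros_like(theta)
        d[idx] = 1e-5
        grad[idx] = (coloring_loss(theta + d, edges)
                     - coloring_loss(theta - d, edges)) / 2e-5
    theta -= 0.5 * grad

colors = theta.argmax(axis=1)  # discretize the relaxed solution
```

At a minimum of the relaxed loss each node concentrates on its own color, so the argmax recovers a proper coloring of the triangle.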
Investigating layer-selective transfer learning of QAOA parameters for Max-Cut problem
Venturelli, Francesco Aldo, Das, Sreetama, Caruso, Filippo
Quantum approximate optimization algorithm (QAOA) is a variational quantum algorithm (VQA) well suited to noisy intermediate-scale quantum (NISQ) processors, and is highly successful at solving combinatorial optimization problems (COPs). It has been observed that the optimal variational parameters obtained from one instance of a COP can be transferred to another instance, producing satisfactory solutions for the latter. In this context, a suitable method for further improving the solution is to fine-tune a subset of the transferred parameters. We numerically explore the role of optimizing individual QAOA layers in improving the approximate solution of the Max-Cut problem after parameter transfer. We also investigate the trade-off between a good approximation and the required optimization time when optimizing transferred QAOA parameters. These studies show that optimizing a subset of layers can be more effective, at a lower time cost, than optimizing all layers.
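The layer-selective fine-tuning idea, optimizing only a chosen subset of transferred parameters while freezing the rest, can be sketched generically. The `cost` function below is a toy stand-in (in the real setting it would be the Max-Cut expectation estimated on a simulator or device), and `finetune_layers` with its finite-difference updates is an illustrative assumption, not the paper's procedure.

```python
import numpy as np

# Toy stand-in for the QAOA cost: a smooth function of 3 layers of
# (gamma, beta) parameters.
def cost(params):
    g, b = params[:, 0], params[:, 1]
    return float(np.sum(np.cos(g - 0.7) ** 2 + np.cos(b - 0.3) ** 2))

def finetune_layers(params, layers, lr=0.1, steps=200, eps=1e-5):
    """Fine-tune only the selected layers of transferred parameters,
    keeping every other layer frozen (finite-difference gradient descent)."""
    params = params.copy()
    for _ in range(steps):
        for k in layers:                      # only selected layers move
            for j in range(params.shape[1]):
                d = np.zeros_like(params)
                d[k, j] = eps
                grad = (cost(params + d) - cost(params - d)) / (2 * eps)
                params[k, j] -= lr * grad
    return params

transferred = np.full((3, 2), 0.5)    # parameters reused from another instance
tuned = finetune_layers(transferred, layers=[2])  # re-optimize the last layer only
```

Only the selected layer's parameters change; the frozen layers keep their transferred values, which is what makes the partial optimization cheap.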
Transform then Explore: a Simple and Effective Technique for Exploratory Combinatorial Optimization with Reinforcement Learning
Pu, Tianle, Fan, Changjun, Shen, Mutian, Lu, Yizhou, Zeng, Li, Nussinov, Zohar, Chen, Chao, Liu, Zhong
Many complex problems encountered in both production and daily life can be conceptualized as combinatorial optimization problems (COPs) over graphs. In recent years, reinforcement learning (RL) based models have emerged as a promising direction, treating COP solving as a heuristic learning problem. However, current finite-horizon-MDP based RL models have inherent limitations: they cannot explore adequately to improve solutions at test time, which may be necessary given the complexity of NP-hard optimization tasks. Some recent attempts address this issue by focusing on reward design and state feature engineering, which are tedious and ad hoc. In this work, we instead propose a much simpler but more effective technique, named gauge transformation (GT). The technique originates from physics, yet is very effective in enabling RL agents to continuously improve solutions through exploration at test time. Moreover, GT is very simple: it can be implemented in fewer than 10 lines of Python code and applied to the vast majority of RL models. Experimentally, we show that traditional RL models equipped with GT achieve state-of-the-art performance on the MaxCut problem. Furthermore, since GT is independent of any particular RL model, it can be seamlessly integrated into various RL frameworks, paving the way for more effective exploration in solving general COPs.
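In spin-glass physics, one common form of gauge transformation sends spins s_i to g_i s_i and couplings J_ij to g_i g_j J_ij, leaving the energy invariant; choosing the gauge g equal to the current best solution maps that solution to the all-ones configuration, giving an agent a fresh starting point on an equivalent instance. Whether this matches the paper's exact construction is an assumption; the sketch below only shows the transformation and its invariance.

```python
import numpy as np

def ising_energy(J, s):
    return -0.5 * s @ J @ s

def gauge_transform(J, s):
    """Gauge transformation: with gauge g = s, set J'_ij = g_i g_j J_ij and
    s'_i = g_i s_i.  The energy is invariant, and the current solution s is
    mapped to the all-ones configuration, from which exploration can resume."""
    g = s
    return J * np.outer(g, g), s * g

# Random symmetric couplings and a random spin configuration.
rng = np.random.default_rng(1)
n = 8
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)
s = rng.choice([-1.0, 1.0], size=n)

Jg, sg = gauge_transform(J, s)
```

Since the transformed problem is energetically identical, any policy that behaves well near the all-ones state can keep improving the solution instead of stalling at it.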
Latent Random Steps as Relaxations of Max-Cut, Min-Cut, and More
Chanpuriya, Sudhanshu, Musco, Cameron
Algorithms for node clustering typically focus on finding homophilous structure in graphs. That is, they find sets of similar nodes with many edges within, rather than across, the clusters. However, graphs often also exhibit heterophilous structure, as exemplified by (nearly) bipartite and tripartite graphs, where most edges occur across the clusters. Grappling with such structure is typically left to the task of graph simplification. We present a probabilistic model based on non-negative matrix factorization which unifies clustering and simplification, and provides a framework for modeling arbitrary graph structure. Our model is based on factorizing the process of taking a random walk on the graph. It permits an unconstrained parametrization, allowing for optimization via simple gradient descent. By relaxing the hard clustering to a soft clustering, our algorithm relaxes potentially hard clustering problems to tractable ones. We illustrate our algorithm's capabilities on a synthetic graph, as well as simple unsupervised learning tasks involving bipartite and tripartite clustering of orthographic and phonological data.
Efficiently Solve the Max-cut Problem via a Quantum Qubit Rotation Algorithm
Optimizing parameterized quantum circuits promises efficient use of near-term quantum computers to achieve the potential quantum advantage. However, there is a notorious tradeoff between the expressibility and trainability of the parameterized ansatz. We find that in combinatorial optimization problems, since the solutions are described by bit strings, one can trade the expressiveness of the ansatz for high trainability. Specifically, focusing on the max-cut problem, we introduce a simple yet efficient algorithm named the Quantum Qubit Rotation Algorithm (QQRA). The quantum circuit consists of single-qubit rotation gates applied to each qubit. The rotation angles of the gates can be trained free of barren plateaus, so an approximate solution of the max-cut problem can be obtained with probability close to 1. To illustrate the effectiveness of QQRA, we compare it with the well-known quantum approximate optimization algorithm and the classical Goemans-Williamson algorithm.
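Because a circuit of independent single-qubit rotations prepares a product state, its Max-Cut expectation has a simple classical form: qubit i is measured as 1 with probability sin^2(theta_i / 2), and the expected cut sums the disagreement probabilities over edges. The gradient-ascent sketch below is a classical caricature of such an ansatz; the learning rate, iteration count, and toy graph are assumptions.

```python
import numpy as np

def expected_cut(theta, edges):
    """Product state of single-qubit rotations: qubit i reads 1 with
    probability p_i = sin^2(theta_i / 2); with independent qubits the
    expected cut is the sum over edges of P(endpoints disagree)."""
    p = np.sin(theta / 2) ** 2
    return sum(p[u] * (1 - p[v]) + (1 - p[u]) * p[v] for u, v in edges)

# Toy graph: a 4-cycle, whose maximum cut is 4.  Plain gradient ascent.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
theta = np.pi / 2 + np.random.default_rng(2).normal(scale=0.1, size=4)
for _ in range(500):
    p = np.sin(theta / 2) ** 2
    dcut_dp = np.zeros(4)
    for u, v in edges:
        dcut_dp[u] += 1 - 2 * p[v]
        dcut_dp[v] += 1 - 2 * p[u]
    # chain rule: dp_i/dtheta_i = sin(theta_i/2) * cos(theta_i/2)
    theta += 0.3 * dcut_dp * np.sin(theta / 2) * np.cos(theta / 2)
```

As the angles drift toward 0 or pi, each p_i saturates at 0 or 1, so sampling the trained state returns a near-optimal cut with probability close to 1, mirroring the claim in the abstract.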
Clustering with Penalty for Joint Occurrence of Objects: Computational Aspects
The idea is to minimize the occurrence of multiple objects from the same cluster in the same set. In the current paper, we study computational aspects of the method. First, we prove that the problem of finding the optimal clustering is NP-hard. Second, to numerically find a suitable clustering, we propose to use the genetic algorithm augmented by a renumbering procedure, a fast task-specific local search heuristic and an initial solution based on a simplified model. Third, in a simulation study, we demonstrate that our improvements of the standard genetic algorithm significantly enhance its computational performance.
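The abstract does not spell out the renumbering procedure, but a standard canonicalization for label-permutation symmetry, which otherwise makes permutation-equivalent clusterings look different to genetic crossover, is to rename clusters in order of first appearance; a minimal sketch of that assumed form:

```python
def renumber(labels):
    """Rename cluster labels in order of first appearance, so that
    clusterings differing only by a permutation of label names collapse
    to a single canonical representative."""
    mapping = {}
    return [mapping.setdefault(c, len(mapping)) for c in labels]
```

Applying `renumber` to both parents before crossover ensures that cluster k in one individual lines up with cluster k in the other whenever the underlying partitions agree.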
Exploratory Combinatorial Optimization with Reinforcement Learning
Barrett, Thomas D., Clements, William R., Foerster, Jakob N., Lvovsky, A. I.
Many real-world problems can be reduced to combinatorial optimization on a graph, where the subset or ordering of vertices that maximize some objective function must be found. With such tasks often NP-hard and analytically intractable, reinforcement learning (RL) has shown promise as a framework with which efficient heuristic methods to tackle these problems can be learned. Previous works construct the solution subset incrementally, adding one element at a time; however, the irreversible nature of this approach prevents the agent from revising its earlier decisions, which may be necessary given the complexity of the optimization task. We instead propose that the agent should seek to continuously improve the solution by learning to explore at test time. Our approach of exploratory combinatorial optimization (ECO-DQN) is, in principle, applicable to any combinatorial problem that can be defined on a graph. Experimentally, we show our method to produce state-of-the-art RL performance on the Maximum Cut problem. Moreover, because ECO-DQN can start from any arbitrary configuration, it can be combined with other search methods to further improve performance, which we demonstrate using a simple random search.
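The reversible action space described above, where any vertex can be flipped into or out of the solution set at any step, can be caricatured with a plain greedy flip search. ECO-DQN's learned Q-function replaces the greedy rule, so the sketch below only illustrates the action space and the combination of arbitrary starting configurations with a simple random (restart) search.

```python
import numpy as np

def cut_value(W, s):
    """Cut value of a +/-1 assignment s on weighted adjacency matrix W."""
    return 0.25 * np.sum(W * (1 - np.outer(s, s)))

def reversible_search(W, s, steps=100):
    """Greedy local search over a reversible action space: any vertex may
    be flipped at any step (and flipped back later), instead of being
    added to the solution irreversibly."""
    s = s.copy()
    for _ in range(steps):
        gains = s * (W @ s)        # gain_i = change in cut from flipping i
        i = int(np.argmax(gains))
        if gains[i] <= 0:
            break                  # single-flip local optimum
        s[i] = -s[i]
    return s

# Random unweighted graph; arbitrary starts plus restarts play the role of
# the simple random search mentioned in the abstract.
rng = np.random.default_rng(4)
n = 12
W = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
W = W + W.T
starts = [rng.choice([-1.0, 1.0], size=n) for _ in range(5)]
best = max((reversible_search(W, s0) for s0 in starts),
           key=lambda s: cut_value(W, s))
```

Each run only accepts strictly improving flips, so every final configuration is at least as good as its starting point, and restarting from different arbitrary configurations is what a learned exploratory policy does more intelligently.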
Highly parallel algorithm for the Ising ground state searching problem
Yavorsky, A., Markovich, L. A., Polyakov, E. A., Rubtsov, A. N.
Finding an energy minimum in the Ising model is an exemplar objective, associated with many combinatorial optimization problems, that is computationally hard in general but occurs in all areas of modern science. Several numerical methods provide solutions for medium-size Ising spin systems; however, they are either computationally slow and poorly parallelized, or do not give sufficiently good results for large systems. In this paper, we present a highly parallel algorithm, called Mean-field Annealing from a Random State (MARS), incorporating the best features of classical simulated annealing (SA) and mean-field annealing (MFA). The algorithm is based on mean-field descent from a randomly selected configuration and temperature. Since a single run requires little computational effort, effectiveness is achieved through massive parallelization. MARS shows excellent performance on both large Ising spin systems and a set of exemplary maximum cut benchmark instances, in terms of both solution quality and computational time.
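A single MARS-style run, as described, amounts to iterating the mean-field self-consistency update while annealing the temperature, starting from a random configuration. The sketch below is a minimal caricature: the schedule, step counts, toy instance, and a best-of-several loop standing in for massive parallelism are all assumptions.

```python
import numpy as np

def mars_run(J, rng, steps=200, beta0=0.5, beta1=5.0):
    """One MARS-style run (a sketch): start the mean-field magnetizations m
    at a random configuration and anneal the inverse temperature upward,
    iterating the self-consistency update m_i = tanh(beta * sum_j J_ij m_j)."""
    m = rng.uniform(-1, 1, size=len(J))
    for beta in np.linspace(beta0, beta1, steps):
        m = np.tanh(beta * (J @ m))
    s = np.sign(m)
    return s, -0.5 * s @ J @ s  # spin configuration and its Ising energy

# Ferromagnetic ring of n spins: the two uniform configurations are the
# ground states, with energy -n.
n = 10
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = 1.0
rng = np.random.default_rng(3)
# Independent cheap runs are embarrassingly parallel; here we simply take
# the best of several sequential runs.
best = min((mars_run(J, rng) for _ in range(8)), key=lambda t: t[1])
```

Because each run is a short sequence of dense matrix-vector products, many runs can be launched in parallel on a GPU or cluster, which is where the method's claimed effectiveness comes from.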